NashAE: Disentangling Representations Through Adversarial Covariance Minimization

Authors

Abstract

We present a self-supervised method to disentangle factors of variation in high-dimensional data that does not rely on prior knowledge of the underlying variation profile (e.g., no assumptions on the number or distribution of the individual latent variables to be extracted). In this method, which we call NashAE, feature disentanglement is accomplished in the low-dimensional latent space of a standard autoencoder (AE) by promoting the discrepancy between each encoding element and information about that element recovered from all other encoding elements. Disentanglement is promoted efficiently by framing this as a minmax game between the AE and an ensemble of regression networks, each of which provides an estimate of one element conditioned on an observation of all the others. We quantitatively compare our approach with leading disentanglement methods using existing disentanglement metrics. Furthermore, we show that NashAE has increased reliability and an increased capacity to capture salient data characteristics in the learned latent representation.
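The adversarial scheme described above can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' reference implementation: the network sizes, the `others` helper, and the adversarial weight `0.1` are all hypothetical choices. An ensemble of small regressors tries to predict each latent element from the remaining ones, while the autoencoder is trained to reconstruct the input and simultaneously maximize the regressors' error, pushing the latent elements toward independence.

```python
# Hedged sketch of the NashAE minmax game (hypothetical shapes and names).
import torch
import torch.nn as nn

torch.manual_seed(0)
x_dim, z_dim, batch = 16, 4, 32

encoder = nn.Sequential(nn.Linear(x_dim, 32), nn.ReLU(),
                        nn.Linear(32, z_dim), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(),
                        nn.Linear(32, x_dim))
# One small regressor per latent element; each sees the other z_dim - 1 elements.
predictors = nn.ModuleList(
    nn.Sequential(nn.Linear(z_dim - 1, 16), nn.ReLU(), nn.Linear(16, 1))
    for _ in range(z_dim)
)

opt_ae = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_pred = torch.optim.Adam(predictors.parameters(), lr=1e-3)

def others(z, i):
    """All latent columns except column i."""
    return torch.cat([z[:, :i], z[:, i + 1:]], dim=1)

x = torch.randn(batch, x_dim)  # stand-in data for the sketch

# Step 1: the predictors minimize the MSE of recovering each latent
# element from the rest (the encoder is frozen via detach here).
z = encoder(x).detach()
pred_loss = sum(
    ((predictors[i](others(z, i)).squeeze(1) - z[:, i]) ** 2).mean()
    for i in range(z_dim))
opt_pred.zero_grad()
pred_loss.backward()
opt_pred.step()

# Step 2: the AE minimizes reconstruction error while *maximizing* the
# predictors' error -- the adversarial term that discourages any latent
# element from being recoverable from the others.
z = encoder(x)
recon = ((decoder(z) - x) ** 2).mean()
adv = sum(
    ((predictors[i](others(z, i)).squeeze(1) - z[:, i]) ** 2).mean()
    for i in range(z_dim))
ae_loss = recon - 0.1 * adv  # the adversarial weight is a free choice
opt_ae.zero_grad()
ae_loss.backward()
opt_ae.step()

print(float(pred_loss), float(recon))
```

In practice the two steps alternate over many batches; at the equilibrium of the game, no regressor can do better than predicting the mean of its target element, which is what motivates the "Nash" in the method's name.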


Related Articles

Disentangling factors of variation in deep representations using adversarial training

We introduce a conditional generative model for learning to disentangle the hidden factors of variation within a set of labeled observations, and separate them into complementary codes. One code summarizes the specified factors of variation associated with the labels. The other summarizes the remaining unspecified variability. During training, the only available source of supervision comes from...


Adversarial Manipulation of Deep Representations

We show that the representation of an image in a deep neural network (DNN) can be manipulated to mimic those of other natural images, with only minor, imperceptible perturbations to the original image. Previous methods for generating adversarial images focused on image perturbations designed to produce erroneous class labels, while we concentrate on the internal layers of DNN representations. I...


Adversarial Multiclass Classification: A Risk Minimization Perspective

Recently proposed adversarial classification methods have shown promising results for cost sensitive and multivariate losses. In contrast with empirical risk minimization (ERM) methods, which use convex surrogate losses to approximate the desired non-convex target loss function, adversarial methods minimize non-convex losses by treating the properties of the training data as being uncertain and...



Kernel Feature Selection via Conditional Covariance Minimization

We propose a framework for feature selection that employs kernel-based measures of independence to find a subset of covariates that is maximally predictive of the response. Building on past work in kernel dimension reduction, we formulate our approach as a constrained optimization problem involving the trace of the conditional covariance operator, and additionally provide some consistency resul...



Journal

Journal title: Lecture Notes in Computer Science

Year: 2022

ISSN: 1611-3349, 0302-9743

DOI: https://doi.org/10.1007/978-3-031-19812-0_3